A hardware MP3 decoder with low precision floating point intermediate storage

Authors

  • Andreas Ehliar
  • Johan Eilert
  • Mikael Olausson
Abstract

The effects of using limited-precision floating point for intermediate storage in an embedded MP3 decoder are investigated in this thesis. The advantage of using limited precision is that the values need shorter word lengths and thus a smaller memory for storage. The official reference decoder was modified so that the effects of different word lengths and algorithms could be examined. Finally, a software and hardware prototype was implemented that uses a 16-bit-wide memory for intermediate storage. The prototype is classified as a limited-accuracy MP3 decoder; only Layer III is supported. The decoder could easily be extended to a full-precision MP3 decoder if a corresponding increase in memory usage were accepted.
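The storage trade-off described above can be sketched with a minimal example. The abstract does not specify the decoder's actual 16-bit format, so IEEE 754 half precision is used here purely as an illustrative stand-in for a limited-precision intermediate format: each value occupies 2 bytes instead of 4, at the cost of a small rounding error.

```python
import struct

def to_16bit(x: float) -> bytes:
    # Pack a value into IEEE 754 half precision (16 bits).
    # Illustrative stand-in only; the paper's actual 16-bit
    # intermediate format is not specified in the abstract.
    return struct.pack('<e', x)

def from_16bit(b: bytes) -> float:
    # Unpack the 16-bit value back to a Python float.
    return struct.unpack('<e', b)[0]

# Audio intermediates are typically in [-1, 1], where half
# precision keeps roughly three decimal digits of accuracy.
sample = 0.123456789
stored = to_16bit(sample)            # 2 bytes instead of 4
recovered = from_16bit(stored)
error = abs(sample - recovered)
print(len(stored), recovered, error)
```

Halving the word length of every intermediate value halves the storage memory, which is the motivation the abstract gives for accepting a limited-accuracy decoder.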


Related works

MP3 decoding on FPGA: a case study for floating point acceleration

Reconfigurable devices are becoming an extremely attractive means for computing and prototyping. The main reasons for their popularity are fast turnaround time, cheap implementation (low/medium volume), easier design flow (no physical implementation or fabrication involvement), and great speed-up potential through application parallelism mapping on the reconfigurable fabric. Moreover, with embe...


Low cost floating-point unit design for audio applications

This paper presents a low-cost, single-cycle floating-point unit developed for digital audio processing applications. In the unit, the serial steps of floating-point operations are paralleled to reduce critical path delay, and hardware resources are shared with the integer datapath to minimize area overhead. Its area overhead is as small as 38% of the fixed-point datapath, and the critical path...


Analysing Single Precision Floating Point Multiplier on Virtex 2P Hardware Module

FPGAs are increasingly being used in the high performance and scientific computing community to implement floating-point based hardware accelerators. We present FPGA floating-point multiplication. Such circuits can be extremely useful in the FPGA implementation of complex systems that benefit from the reprogramability and parallelism of the FPGA device but also require a general purpose multipl...


Unrestricted Algorithms for Elementary and Special Functions

Floating-point computations are usually performed with fixed precision: the machine used may have “single” or “double” precision floating-point hardware, or on small machines fixed-precision floating-point operations may be implemented by software or firmware. Most high-level languages support only a small number of floating-point precisions, and those which support an arbitrary number usually ...


Deep Convolutional Neural Network Inference with Floating-point Weights and Fixed-point Activations

Deep convolutional neural network (CNN) inference requires significant amount of memory and computation, which limits its deployment on embedded devices. To alleviate these problems to some extent, prior research utilize low precision fixed-point numbers to represent the CNN weights and activations. However, the minimum required data precision of fixed-point weights varies across different netw...




Publication date: 2003